    Enabling dynamics in face analysis

    Most approaches in automatic face analysis rely solely on static appearance. However, temporal analysis of expressions reveals interesting patterns. For a better understanding of the human face, this thesis focuses on temporal changes in the face and on dynamic patterns of expressions. In addition to improving the state of the art in several areas of automatic face analysis, the present thesis introduces new and significant findings on facial dynamics. The contributions on temporal analysis and understanding of faces can be summarized as follows: 1) An accurate facial landmarking method is proposed to enable detailed analysis of facial movements; 2) Dynamic feature descriptors are introduced to reveal the temporal patterns of facial expressions; 3) Various frameworks are proposed to exploit temporal information and facial dynamics in expression spontaneity analysis, age estimation, and kinship verification; 4) An affect-responsive system is designed to create an adaptive application empowered by face-to-face human-computer interaction. We believe that affective technologies will shape the future by providing a more natural form of human-machine interaction. To this end, the proposed methods and ideas may lead to more efficient uses of temporal information and dynamic features in face processing and affective computing.

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    A smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histograms of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features did improve it. Unlike many manual or semi-automatic methodologies, our approach automatically classifies all smiles as either `spontaneous' or `posed' using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
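    To illustrate the kind of appearance descriptor the abstract credits with the best performance, the following is a minimal, simplified HOG-style descriptor in NumPy. The cell size, bin count, and L2 normalization here are illustrative assumptions, not the paper's settings; in a full pipeline such descriptors would be fed to an SVM classifier.

```python
import numpy as np

def hog_descriptor(gray, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, L2-normalized over the whole image.
    NOTE: illustrative sketch, not the paper's exact HOG variant."""
    gy, gx = np.gradient(gray.astype(float))      # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    bin_w = 180.0 / bins
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a // bin_w).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    v = hist.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

    For a 32x32 grayscale patch with 8-pixel cells and 9 bins, this yields a 144-dimensional unit-norm vector; real HOG implementations add block-level normalization and interpolation between bins.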

    Automatic Estimation of Taste Liking through Facial Expression Dynamics

    The level of taste liking is an important measure for a number of applications, such as predicting long-term consumer acceptance of different food and beverage products. Based on the fact that facial expressions are spontaneous, instant, and heterogeneous sources of information, this paper aims to automatically estimate the level of taste liking from facial expression videos. Instead of using handcrafted features, the proposed approach deep-learns regional expression dynamics and encodes them into a Fisher vector for video representation. Regional Fisher vectors are then concatenated and classified by linear SVM classifiers. The aim is to reveal the hidden patterns of taste-elicited responses by exploiting expression dynamics such as the speed and acceleration of facial movements. To this end, we have collected the first large-scale beverage tasting database in the literature. The database has 2,970 videos of taste-induced facial expressions collected from 495 subjects. Our large-scale experiments on this database show that the proposed approach achieves an accuracy of 70.37 percent for distinguishing between three levels of taste liking. Furthermore, we assess human performance by recruiting 45 participants, and show that humans are significantly less reliable than the proposed method at estimating taste appreciation from facial expressions.
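    The abstract emphasizes speed and acceleration of facial movements as dynamics cues. A minimal sketch of that idea, assuming tracked landmark trajectories are available, is to pool per-landmark speed and acceleration statistics into a fixed-length video feature. The pooling choices below are assumptions for illustration; the paper itself deep-learns regional dynamics and encodes them as Fisher vectors rather than using these handcrafted statistics.

```python
import numpy as np

def dynamics_features(landmarks):
    """landmarks: (T, P, 2) array of P facial points over T frames.
    Returns a 4*P vector of speed/acceleration statistics.
    NOTE: hypothetical pooling, not the paper's Fisher-vector encoding."""
    disp = np.diff(landmarks, axis=0)        # per-frame displacement, (T-1, P, 2)
    speed = np.linalg.norm(disp, axis=2)     # per-point speed, (T-1, P)
    accel = np.diff(speed, axis=0)           # change of speed, (T-2, P)
    return np.concatenate([
        speed.mean(axis=0),                  # average speed per point
        speed.max(axis=0),                   # peak speed per point
        accel.mean(axis=0),                  # mean acceleration per point
        np.abs(accel).max(axis=0),           # peak |acceleration| per point
    ])
```

    A classifier (e.g. a linear SVM, as in the paper) could then be trained on such vectors, one per video.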

    A statistical method for 2D facial landmarking

    Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method achieves 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, improving the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, or natural occlusions such as beard and mustache. We further test the quality of the landmarks in a facial-expression-recognition application and report landmarking-induced improvement over the baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
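    Since the method builds on Gabor wavelet features, the following sketch shows how a small multi-orientation Gabor-like filter bank can be constructed and evaluated at a candidate landmark location. The kernel size, wavelength, and orientation count are illustrative assumptions; the paper's full method additionally fits a parsimonious mixture model to these features and combines them with a shape prior, which is not reproduced here.

```python
import numpy as np

def gabor_bank(size=15, sigma=3.0, lam=6.0, n_orient=4):
    """Bank of zero-mean, real-valued Gabor-like kernels at n_orient
    orientations. Parameters are illustrative, not the paper's."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    bank = []
    for k in range(n_orient):
        th = k * np.pi / n_orient
        xr = x * np.cos(th) + y * np.sin(th)   # rotated coordinates
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
        bank.append(g - g.mean())              # zero mean: flat regions -> 0
    return np.stack(bank)

def local_features(img, cy, cx, bank):
    """Filter responses at pixel (cy, cx): one value per orientation."""
    r = bank.shape[1] // 2
    patch = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
    return np.array([(patch * k).sum() for k in bank])
```

    Such per-location response vectors are the kind of input a statistical landmark model can score; zero-mean kernels ensure uniform regions produce no response, so structured areas like eye corners stand out.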